Ece Gumusel, Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington.
Conversational AI presents significant privacy challenges due to the complex interplay between user behavior, trust, and data protection. Design choices in user interfaces and language models can inadvertently threaten user privacy and create legal compliance risks. This talk will examine the limitations of current regulatory approaches in safeguarding user privacy across the stages of conversational AI interaction. I begin by reviewing research on legal equivalence in NLP, highlighting social biases and fairness concerns in model training. I then focus on the privacy harms and risks that users perceive in LLM-based chatbot interactions, including the effects of anthropomorphic personas. I also highlight how key compliance actors, such as legal professionals, struggle to navigate these privacy complexities in order to meet regulatory requirements. Finally, I outline future directions for ethical AI governance, addressing challenges and privacy-centered mitigation strategies in multimodal conversational AI systems and advocating for AI privacy literacy education alongside regulatory reform.
Ece Gumusel’s research employs mixed methods to analyze user privacy dynamics in conversational AI, contributing to usable privacy and security, privacy compliance, human-computer interaction, and social informatics. Her work has been published in leading international journals and conferences, including JASIST, JELIS, ASIS&T, iConference, ACM CCS, and ACL. Beyond her academic research, Gumusel has applied her expertise in real-world settings, working with organizations such as MITRE, NIST NCCoE, EDUCAUSE, and the University of Illinois Urbana-Champaign. Prior to pursuing her Ph.D., she was an academic researcher at the University of Illinois College of Law, where she earned her LL.M. in Intellectual Property and Technology Law.